Image Inpainting

Partial and Gated Convolution

  • partial convolution [1]: hard gating with a single-channel, non-learnable mask-update rule

  • gated convolution [2]: soft gating with multi-channel, learnable gating features
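
The contrast between the two gating schemes can be sketched in a few lines. This is a toy single-channel numpy version, not the papers' CNN layers: `partial_conv` renormalizes by the valid fraction and hard-updates the mask as in [1], while `gated_conv` multiplies the feature response by a learned sigmoid gate as in [2] (here the "learned" weights are just function arguments).

```python
import numpy as np

def conv2d(x, w):
    """Valid-mode 2D cross-correlation of a single-channel map x with kernel w."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def partial_conv(x, mask, w, bias=0.0):
    """Partial convolution [1]: convolve only valid (mask==1) pixels,
    renormalize by the valid fraction, and hard-update the mask."""
    kh, kw = w.shape
    raw = conv2d(x * mask, w)
    valid = conv2d(mask, np.ones_like(w))      # valid pixels per window
    out = np.where(valid > 0,
                   raw * (kh * kw) / np.maximum(valid, 1) + bias, 0.0)
    new_mask = (valid > 0).astype(float)       # hard, non-learnable gating
    return out, new_mask

def gated_conv(x, w_feat, w_gate):
    """Gated convolution [2]: a sigmoid gate, produced by a second
    (learnable) convolution, softly scales the feature response."""
    feat = conv2d(x, w_feat)
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, w_gate)))  # soft gate in (0, 1)
    return feat * gate
```

Note how the partial convolution's mask update is all-or-nothing (a window with any valid pixel becomes fully valid), whereas the gated convolution outputs a continuous per-pixel gate.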

Filling Priority

filling priority [3]: priority is the product of a confidence term (a measure of the amount of reliable information surrounding the pixel) and a data term (a function of the strength of isophotes hitting the fill front). The patch to be filled next is selected by priority, similar to patch-based texture synthesis.

<img src="http://bcmi.sjtu.edu.cn/~niuli/github_images/bO5YXEQ.jpg" width="40%"> 
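
The priority computation above can be sketched as follows. This is a simplified numpy version of the idea in [3]: confidence is the filled fraction of the surrounding patch, and the isophote data term is approximated here by the local gradient magnitude (the full method projects the isophote onto the front normal).

```python
import numpy as np

def fill_priorities(image, filled, patch=3):
    """Toy priority map in the spirit of Criminisi et al. [3]:
    priority = confidence * data, where confidence is the filled fraction
    of the surrounding patch and the data term is approximated by the
    local gradient magnitude of the (masked) image."""
    H, W = image.shape
    r = patch // 2
    gy, gx = np.gradient(image * filled)
    data = np.hypot(gx, gy)
    pri = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            if filled[i, j]:
                continue  # only unfilled pixels can be filled next
            win = filled[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            conf = win.mean()
            if conf > 0:  # pixel lies on the fill front
                pri[i, j] = conf * (data[i, j] + 1e-8)
    return pri

def next_target(image, filled):
    """Select the highest-priority front pixel to fill next."""
    pri = fill_priorities(image, filled)
    return np.unravel_index(np.argmax(pri), pri.shape)
```

After filling the chosen patch from the best-matching source patch, the confidence and priority maps are updated and the process repeats until the hole is closed.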

Diverse image inpainting

  • random vector: sample a random vector to generate diverse and plausible outputs [6]

  • attribute vector: use target attribute values to guide image inpainting [7]

  • autoregressive model: sample from an autoregressive model to obtain diverse completions [11] [12]

Auxiliary Information

  • Semantics

    • enforce the inpainted result to have the expected semantics [8]
    • first inpaint the semantic map, then use the completed semantic map as guidance [9]
    • use semantics to guide feature learning in the decoder [10]
    • semantic-aware attention [13]
  • Edges

    • first inpaint the edge map, then use the completed edge map to guide image inpainting [4] [5]

Frequency Domain

  • use frequency maps as network input [14]
  • Fourier convolution: LaMa [15]
  • wavelet-based generation: WaveFill [16]
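
The core of a fast Fourier convolution (the global branch used in LaMa [15]) can be sketched in numpy. This is a minimal single-channel stand-in: the full layer also has a local branch and stacks real/imaginary parts through a conv-BN-ReLU block, but the key idea survives, since a pointwise product in the frequency domain gives every output pixel an image-wide receptive field, which is what large-mask inpainting needs.

```python
import numpy as np

def spectral_conv(x, w_real, w_imag):
    """Spectral transform at the heart of fast Fourier convolution [15]:
    FFT to the frequency domain, apply a learned per-frequency complex
    weight (a 1x1 convolution over spectra), and transform back."""
    spec = np.fft.rfft2(x)                  # real FFT over spatial dims
    spec = spec * (w_real + 1j * w_imag)    # pointwise spectral weighting
    return np.fft.irfft2(spec, s=x.shape)   # back to the spatial domain
```

With identity weights (all-ones real part, zero imaginary part) the layer reduces to the identity map, which makes the round trip easy to check.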

Bridging Inpainting and Generation

[17]

Transformer

[12] [18] [19]

Diffusion Model

[20] [21] [22] [23]
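
The mask-conditioning trick used by RePaint [20] fits in one step of the reverse process: at every timestep, the known region is replaced by the ground-truth image noised to the current level, while the hole comes from the model's denoising step. A minimal numpy sketch, with `denoise_fn` as a placeholder for a trained diffusion model:

```python
import numpy as np

def repaint_step(x_t, known, mask, noise_level, denoise_fn, rng):
    """One reverse step in the style of RePaint [20].
    mask == 1 marks the known region, which is overwritten with the
    ground-truth image noised to the current level; the hole (mask == 0)
    is taken from the model's denoising step."""
    x_known = known + noise_level * rng.standard_normal(known.shape)
    x_unknown = denoise_fn(x_t)
    return mask * x_known + (1 - mask) * x_unknown
```

RePaint additionally resamples each step several times (jumping forward and back in time) to harmonize the hole with the known content; that outer loop is omitted here.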

References

  1. Liu, Guilin, et al. “Image inpainting for irregular holes using partial convolutions.” ECCV, 2018.
  2. Yu, Jiahui, et al. “Free-form image inpainting with gated convolution.” ICCV, 2019.
  3. Criminisi, Antonio, Patrick Pérez, and Kentaro Toyama. “Region filling and object removal by exemplar-based image inpainting.” TIP, 2004.
  4. Nazeri, Kamyar, et al. “Edgeconnect: Generative image inpainting with adversarial edge learning.” arXiv preprint arXiv:1901.00212 (2019).
  5. Xiong, Wei, et al. “Foreground-aware image inpainting.” CVPR, 2019.
  6. Zheng, Chuanxia, Tat-Jen Cham, and Jianfei Cai. “Pluralistic image completion.” CVPR, 2019.
  7. Chen, Zeyuan, et al. “High resolution face completion with multiple controllable attributes via fully end-to-end progressive generative adversarial networks.” arXiv preprint arXiv:1801.07632 (2018).
  8. Li, Yijun, et al. “Generative face completion.” CVPR, 2017.
  9. Song, Yuhang, et al. “Spg-net: Segmentation prediction and guidance network for image inpainting.” arXiv preprint arXiv:1805.03356 (2018).
  10. Liao, Liang, et al. “Guidance and evaluation: Semantic-aware image inpainting for mixed scenes.” arXiv preprint arXiv:2003.06877 (2020).
  11. Peng, Jialun, et al. “Generating Diverse Structure for Image Inpainting With Hierarchical VQ-VAE.” CVPR, 2021.
  12. Wan, Ziyu, et al. “High-Fidelity Pluralistic Image Completion with Transformers.” arXiv preprint arXiv:2103.14031 (2021).
  13. Liao, Liang, et al. “Image inpainting guided by coherence priors of semantics and textures.” CVPR, 2021.
  14. Roy, Hiya, et al. “Image inpainting using frequency domain priors.” arXiv preprint arXiv:2012.01832 (2020).
  15. Suvorov, Roman, et al. “Resolution-robust Large Mask Inpainting with Fourier Convolutions.” WACV, 2022.
  16. Yu, Yingchen, et al. “WaveFill: A Wavelet-based Generation Network for Image Inpainting.” ICCV, 2021.
  17. Zhao, Shengyu, et al. “Large scale image completion via co-modulated generative adversarial networks.” ICLR, 2021.
  18. Zheng, Chuanxia, et al. “Bridging global context interactions for high-fidelity image completion.” CVPR, 2022.
  19. Li, Wenbo, et al. “MAT: Mask-Aware Transformer for Large Hole Image Inpainting.” CVPR, 2022.
  20. Lugmayr, Andreas, et al. “Repaint: Inpainting using denoising diffusion probabilistic models.” CVPR, 2022.
  21. Rombach, Robin, et al. “High-resolution image synthesis with latent diffusion models.” CVPR, 2022.
  22. Li, Wenbo, et al. “SDM: Spatial Diffusion Model for Large Hole Image Inpainting.” arXiv preprint arXiv:2212.02963 (2022).
  23. Wang, Su, et al. “Imagen Editor and EditBench: Advancing and Evaluating Text-Guided Image Inpainting.” arXiv preprint arXiv:2212.06909 (2022).